    How FAIR can you get? Image Retrieval as a Use Case to calculate FAIR Metrics

    A large number of services for research data management strive to adhere to the FAIR guiding principles for scientific data management and stewardship. To evaluate these services and to indicate possible improvements, use-case-centric metrics are needed as an addendum to existing metric frameworks. The retrieval of spatially and temporally annotated images can exemplify such a use case. The prototypical implementation indicates that currently no research data repository achieves the full score. Suggestions on how to increase the score include automatic annotation based on the metadata inside the image file and support for content negotiation to retrieve the images. These and other insights can lead to an improvement of data integration workflows, resulting in a better and more FAIR approach to managing research data. Comment: This is a preprint of a paper accepted for the 2018 IEEE conference.
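    The suggested content negotiation can be pictured as requesting an image representation of a repository record via the HTTP Accept header. The sketch below is a minimal illustration, not the paper's implementation; the record URL and media type are placeholders.

    ```python
    import requests

    # Minimal sketch of content negotiation for image retrieval.
    # The repository URL and the requested media type are placeholders.
    record_url = "https://repository.example.org/records/1234"
    response = requests.get(record_url, headers={"Accept": "image/tiff"})

    content_type = response.headers.get("Content-Type", "")
    if response.ok and content_type.startswith("image/"):
        with open("record_1234.tiff", "wb") as fh:
            fh.write(response.content)
    else:
        # The repository did not (or could not) negotiate an image representation.
        print(f"No image representation returned: {response.status_code} ({content_type})")
    ```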

    Preserving Conversations with Contemporary Holocaust Witnesses

    Conversations with Holocaust survivors are an integral part of education at schools and universities, as well as part of the German memory culture. The goal of interactive stereoscopic digital Holocaust testimonies is to preserve the effects of meeting and interacting with these contemporary witnesses as faithfully as possible. These virtual humans are non-synthetic; instead, all their actions, such as answers and movements, are pre-recorded. We conducted a preliminary study to gauge how people perceive this first German-speaking digital interactive Holocaust testimony. The focus of our investigation is the ease of use, the accuracy and relevance of the answers given, as well as the authenticity and emotiveness of the virtual contemporary witness, as perceived by the participants. We found that digital 3D testimonies can convey emotions and lead to enjoyable experiences, which correlates with the frequency of correctly matched answers.
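    Matching a spoken question to the best pre-recorded answer is, at its core, a retrieval step. The sketch below uses TF-IDF cosine similarity purely as a stand-in; the abstract does not describe the system's actual matching technique, and the questions and clip names are invented for illustration.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented examples: pre-recorded answer clips keyed by representative questions.
    clips = {
        "Where were you born?": "answer_birthplace.mp4",
        "How did you survive the war?": "answer_survival.mp4",
        "What message do you have for young people?": "answer_message.mp4",
    }
    questions = list(clips)
    vectorizer = TfidfVectorizer().fit(questions)
    question_matrix = vectorizer.transform(questions)

    def match_answer(user_question: str) -> str:
        """Return the clip whose representative question is most similar."""
        similarities = cosine_similarity(
            vectorizer.transform([user_question]), question_matrix
        )[0]
        return clips[questions[similarities.argmax()]]

    print(match_answer("In which town were you born?"))
    ```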

    Folded Interaction Systems and Their Application to the Survivability Analysis of Unbounded Systems

    Modeling the fulfillment of global properties like survivability is a challenging problem in unbounded systems such as Grids, peer-to-peer systems, or swarms. This paper proposes Folded Interaction Systems (FIS), an extension of the classic I-Systems framework, to overcome these modeling issues. We apply FIS to a case of survivability assessment in Grids and demonstrate the identification of essential capabilities, the modeling of harmful incidents, and the derivation of standard strategies to sustain the survival of a system's mission. FIS is not restricted to survivability; it can be used to investigate the preservation of any global property.

    Exploiting the Quantum Advantage for Satellite Image Processing: Review and Assessment

    This article examines the current status of quantum computing in Earth observation (EO) and satellite imagery. We analyze the potential limitations and applications of quantum learning models when dealing with satellite data, considering the persistent challenges of profiting from quantum advantage and of finding the optimal sharing between high-performance computing (HPC) and quantum computing (QC). We then assess some parameterized quantum circuit models transpiled into a Clifford+T universal gate set. The T-gates shed light on the quantum resources required to deploy quantum models, either on an HPC system or on several QC systems. In particular, if the T-gates cannot be simulated efficiently on an HPC system, a quantum computer and its computational power can be applied instead of conventional techniques. Our quantum resource estimation showed that quantum machine learning (QML) models with a sufficient number of T-gates provide the quantum advantage if and only if they generalize on unseen data points better than their classical counterparts deployed on the HPC system and they break the symmetry in their weights at each learning iteration, as in conventional deep neural networks. We also estimated the quantum resources required for some QML models as an initial innovation. Lastly, we defined the optimal sharing between an HPC+QC system for executing QML models for hyperspectral satellite images. These form a unique dataset compared with other satellite images, since they require only a limited number of input qubits and offer a small number of labeled benchmark images, making them less challenging to deploy on quantum computers. Comment: It could be withdrawn if accepted in IEEE Transactions on Quantum Engineering.
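    A T-count of the kind used for such resource estimates can be obtained along these lines with Qiskit's Solovay-Kitaev pass, which approximates continuous single-qubit rotations with H/T/Tdg sequences. This is a sketch assuming a small, fully bound ansatz; it is not the authors' estimation pipeline.

    ```python
    import numpy as np
    from qiskit import QuantumCircuit
    from qiskit.transpiler.passes import SolovayKitaev

    # Small bound parameterized circuit standing in for a QML ansatz (assumption).
    qc = QuantumCircuit(2)
    qc.ry(0.8, 0)
    qc.ry(1.3, 1)
    qc.cx(0, 1)
    qc.rz(np.pi / 5, 1)

    # Approximate each single-qubit rotation with Clifford+T (H, T, Tdg) sequences.
    skd = SolovayKitaev(recursion_degree=2)
    clifford_t = skd(qc)

    ops = clifford_t.count_ops()
    t_count = ops.get("t", 0) + ops.get("tdg", 0)
    print(f"T-count: {t_count} of {sum(ops.values())} gates")
    ```

    A higher recursion degree tightens the approximation at the cost of longer H/T/Tdg sequences, which is exactly the trade-off that drives the T-gate resource estimate.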

    Quantum Transfer Learning for Real-World, Small, and High-Dimensional Datasets

    Quantum machine learning (QML) networks promise to have some computational (or quantum) advantage for classifying supervised datasets (e.g., satellite images) over some conventional deep learning (DL) techniques due to their expressive power via their local effective dimension. There are, however, two main challenges regardless of the promised quantum advantage: 1) Currently available quantum bits (qubits) are very small in number, while real-world datasets are characterized by hundreds of high-dimensional elements (i.e., features). Additionally, there is not a single unified approach for embedding real-world high-dimensional datasets in a limited number of qubits. 2) Some real-world datasets are too small for training intricate QML networks. Hence, to tackle these two challenges for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets in one go, we employ quantum transfer learning composed of a multi-qubit QML network and a very deep convolutional network (with a VGG16 architecture) extracting informative features from any small, high-dimensional dataset. We use real-amplitude and strongly-entangling N-layer QML networks with and without data re-uploading layers as the multi-qubit QML network, and evaluate their expressive power as quantified by their local effective dimension; the lower the local effective dimension of a QML network, the better its performance on unseen data. Our numerical results show that the strongly-entangling N-layer QML network has a lower local effective dimension than the real-amplitude QML network and outperforms it on the hard-to-classify three-class labelling problem. In addition, quantum transfer learning helps tackle the two challenges mentioned above for benchmarking and validating QML networks on real-world, small, and high-dimensional datasets. Comment: This article is submitted to IEEE TGRS; hence, this version will be removed from arXiv after being published in this IEEE journal.
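    The transfer-learning idea, frozen VGG16 features compressed into a handful of qubit angles and fed to a real-amplitude ansatz, could be sketched as follows. The package choices (torchvision, scikit-learn, Qiskit), the qubit count, and the PCA reduction step are assumptions for illustration, not the paper's exact setup.

    ```python
    import numpy as np
    import torch
    from torchvision.models import vgg16, VGG16_Weights
    from sklearn.decomposition import PCA
    from qiskit import QuantumCircuit
    from qiskit.circuit.library import RealAmplitudes

    N_QUBITS = 4  # assumption: a tiny qubit budget

    # Frozen VGG16 as a feature extractor (classifier head replaced by identity).
    extractor = vgg16(weights=VGG16_Weights.DEFAULT)  # downloads pretrained weights
    extractor.classifier = torch.nn.Identity()
    extractor.eval()

    with torch.no_grad():
        images = torch.rand(8, 3, 224, 224)      # stand-in for a small dataset
        features = extractor(images).numpy()     # shape (8, 25088)

    # Compress the high-dimensional features to one rotation angle per qubit.
    angles = PCA(n_components=N_QUBITS).fit_transform(features)
    angles = np.pi * (angles - angles.min()) / (angles.max() - angles.min())

    # Encode one sample and append a real-amplitude ansatz as the trainable part.
    circuit = QuantumCircuit(N_QUBITS)
    for q, angle in enumerate(angles[0]):
        circuit.ry(float(angle), q)
    ansatz = RealAmplitudes(N_QUBITS, reps=3)
    circuit.compose(ansatz, inplace=True)
    print(circuit.num_parameters, "trainable parameters")
    ```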

    Visualization of climate simulation data in virtual reality using commercial game engines

    Due to the size of its customer base, the video game industry has long been the best-funded proponent of innovative real-time computer graphics. Many advancements in the field of computer graphics, both software and hardware, have become cost-effective due to their use in video games, which in turn funded even further research and breakthroughs. Recent changes in the monetization of commercial game engines made their use in less revenue-driven institutions affordable and, hence, possible. This allows us, given suitable hardware, to build and run computationally expensive, fully interactive real-time visualizations at a fraction of the cost and time. We can thus investigate and explore the data in our virtual reality application far sooner. Additionally, we are able to spend more time iteratively refining the user interaction as well as the preprocessing of the raw scientific data. We supply our visualization with the output data of ClimEx's computational run on the SuperMUC. ClimEx is a research project that studies the effects of climate change on meteorological and hydrological extreme events. It features a multitude of climate-relevant variables and observes the time span between 1950 and 2100. For our use case we chose to compare three different precipitation events. Each event consists of 60 hours of simulated rainfall data preceding a potential 100-year flood, which is a flood event that has an annual exceedance probability of 1%. The first event draws from historical data and represents the rain leading up to the 1999 Pentecost flood. We compare these data with two computer-generated prospective events, which take place in 2060 and 2081, respectively. Since we wish to gain insight into strong local extrema as well as the comprehensive overall trend of the attributes, we chose to display the data in virtual reality. The virtually unlimited number of perspectives and points of view simplifies investigating and understanding the three-dimensional data. We are also able to place the observer at the center of the data and empower them to interact with and steer the visualization in intuitive ways. By utilizing a tool like virtual reality, we are able to create an immersive, interactive, and engaging user experience, which further facilitates the user's ability to focus on the visual display and extract information from the displayed data. This allows users, especially non-expert users, to grasp the data we present in our visualization with less effort. In our paper we present the necessary steps to create an immersive virtual reality 3D visualization from raw scientific data, based on our use case. This entails several aspects of preprocessing, a simple, suitable user interface, as well as our solutions to the challenges we encountered.
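    Preprocessing of this kind can be pictured as slicing a 60-hour window from the simulation output and normalizing it for the engine. The sketch below uses xarray with invented file and variable names and illustrative dates; the actual ClimEx output conventions and the paper's pipeline may differ.

    ```python
    import xarray as xr

    # Invented file and variable names; ClimEx output conventions may differ.
    ds = xr.open_dataset("climex_precipitation.nc")

    # Slice a 60-hour window preceding the flood peak (dates are illustrative).
    event = ds["pr"].sel(time=slice("1999-05-19T00:00", "1999-05-21T12:00"))

    # Normalize to [0, 1] so the values can be uploaded as a texture or
    # height field into the game engine.
    normalized = (event - event.min()) / (event.max() - event.min())
    normalized.to_netcdf("pentecost_event_normalized.nc")
    ```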